Gradient behavior without gradient underlying representations: the case of French liaison

Authors

Abstract


Related articles

Gradient Symbolic Representations in Grammar: The case of French Liaison

Longstanding theoretical debates about whether structure A or structure B is the correct analysis of phenomenon X are commonplace. For example, at the juncture of two words W1 and W2, French liaison consonants alternate with zero. Theories of French phonology have long debated whether the consonant is associated with W1 or W2. In this work, we argue for an alternative approach. Phenomenon X is n...
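The core idea, a symbol that is partially present rather than categorically present or absent, can be sketched in a few lines. The activation values, the shared /t/ of "petit ami", and the surfacing threshold below are illustrative assumptions, not figures from the paper.

```python
# A minimal sketch of the gradient-symbolic idea: a liaison consonant is
# partially activated at the end of W1 and at the start of W2, and surfaces
# when the combined activation is strong enough. All numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class GradientSegment:
    symbol: str
    activation: float  # continuous activation in (0, 1]

def surfaces(w1_final: GradientSegment, w2_initial: GradientSegment,
             threshold: float = 1.0) -> bool:
    """The consonant is pronounced when the summed activation of the two
    partial copies meets the threshold demanded by the phonology."""
    total = 0.0
    if w1_final.symbol == w2_initial.symbol:
        total = w1_final.activation + w2_initial.activation
    return total >= threshold

# e.g. 'petit ami': a partial /t/ at the end of W1 plus a partial /t/ at the
# start of W2 jointly exceed the threshold, so liaison [t] surfaces.
liaison_t_w1 = GradientSegment("t", 0.6)
liaison_t_w2 = GradientSegment("t", 0.5)
print(surfaces(liaison_t_w1, liaison_t_w2))  # True -> liaison applies
```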


French schwa and gradient cumulativity

We model the interaction of two phonological factors that condition French schwa alternations: schwa is more likely after two consonants; and schwa is more likely in the penultimate syllable. Using new data from a judgment study, we show that both factors play a role in schwa epenthesis and deletion, confirming previous impressionistic descriptions, and that the two factors interact cumulativel...
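Cumulative interaction of this kind is naturally captured by a MaxEnt-style (logistic) model, in which each factor contributes additively in log-odds. A minimal sketch follows; the weights and bias are hypothetical placeholders, not estimates from the judgment study.

```python
# A minimal sketch of cumulativity in a MaxEnt-style model: each conditioning
# factor raises the probability of schwa, and the two factors add in log-odds,
# so both together yield a higher probability than either alone.

import math

def p_schwa(after_two_consonants: bool, penultimate_syllable: bool,
            w_cc: float = 1.2, w_penult: float = 0.8,
            bias: float = -1.5) -> float:
    # Weighted sum of factor indicators, passed through a logistic link.
    score = bias + w_cc * after_two_consonants + w_penult * penultimate_syllable
    return 1.0 / (1.0 + math.exp(-score))

for cc in (False, True):
    for pen in (False, True):
        print(cc, pen, round(p_schwa(cc, pen), 3))
# The probability with both factors present exceeds that with either factor
# alone: the hallmark of cumulative interaction.
```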


Extremal region detection guided by maxima of gradient magnitude

A common problem in computer vision applications is detecting regions of interest under different imaging conditions. The state-of-the-art maximally stable extremal regions (MSER) detector finds affine-covariant regions by applying all possible thresholds to the input image, through three main steps: 1) building a component tree of the extremal regions' evolution (enumeration), 2) obtaining region ...
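The two ingredients the abstract names, stock MSER detection and the image gradient magnitude whose maxima guide the proposed variant, can be sketched with OpenCV. The guided selection step itself is the paper's contribution and is not reproduced here; the input path is a placeholder.

```python
# A minimal sketch: standard MSER detection plus the gradient-magnitude map
# whose local maxima the paper uses to restrict which thresholds/regions are
# considered. "input.png" is a hypothetical example image.

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# 1) Stock MSER: enumerate extremal regions across all gray-level thresholds.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)

# 2) Gradient magnitude via Sobel filters; its maxima provide the guidance
#    cue described in the abstract.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)

print(f"{len(regions)} extremal regions; peak gradient {magnitude.max():.1f}")
```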

Gradient representations and the perception of luminosity

The neuronal mechanisms that serve to distinguish between light emitting and light reflecting objects are largely unknown. It has been suggested that luminosity perception implements a separate pathway in the visual system, such that luminosity constitutes an independent perceptual feature. Recently, a psychophysical study was conducted to address the question whether luminosity has a feature s...


Learning to Learn without Gradient Descent by Gradient Descent

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-paramete...
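The interface being learned can be sketched as a loop in which a recurrent network maps the previous query and its observed value to the next query of a derivative-free objective. The tiny numpy RNN below is randomly initialized, standing in for the trained optimizer the abstract describes; the sizes and the toy objective are illustrative assumptions.

```python
# A minimal sketch of a learned black-box optimizer's interface: a recurrent
# network proposes the next query point from the previous query and its
# observed function value. Untrained here; the paper trains such a network
# by gradient descent so that this loop transfers across objectives.

import numpy as np

rng = np.random.default_rng(0)
H = 16                                   # hidden state size (arbitrary)
W_in = rng.normal(scale=0.3, size=(H, 2))
W_h = rng.normal(scale=0.3, size=(H, H))
w_out = rng.normal(scale=0.3, size=H)

def black_box(x: float) -> float:        # stand-in derivative-free objective
    return (x - 1.7) ** 2

h = np.zeros(H)
x, best = 0.0, float("inf")
for _ in range(20):
    y = black_box(x)                     # query the objective (no gradients)
    best = min(best, y)
    h = np.tanh(W_in @ np.array([x, y]) + W_h @ h)  # recurrent update
    x = float(w_out @ h)                 # propose the next query point
print(f"best value found: {best:.3f}")
```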



Journal

Journal title: Proceedings of the Annual Meetings on Phonology

Year: 2020

ISSN: 2377-3324

DOI: 10.3765/amp.v8i0.4650